
Hadoop namenode : Single point of failure

Ask Time:2010-12-22T01:46:06         Author:rakeshr


The Namenode in the Hadoop architecture is a single point of failure.

How do people who have large Hadoop clusters cope with this problem?

Is there an industry-accepted solution that has worked well, wherein a secondary NameNode takes over in case the primary one fails?

Author: rakeshr, reproduced under the CC 4.0 BY-SA copyright license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/4502275/hadoop-namenode-single-point-of-failure
Bkkbrad :

Yahoo has certain recommendations for configuration settings at different cluster sizes to take NameNode failure into account. For example:

"The single point of failure in a Hadoop cluster is the NameNode. While the loss of any other machine (intermittently or permanently) does not result in data loss, NameNode loss results in cluster unavailability. The permanent loss of NameNode data would render the cluster's HDFS inoperable. Therefore, another step should be taken in this configuration to back up the NameNode metadata."

Facebook uses a tweaked version of Hadoop for its data warehouses; it includes some optimizations that focus on NameNode reliability. In addition to the patches available on GitHub, Facebook appears to use AvatarNode specifically for quick switching between primary and secondary NameNodes. Dhruba Borthakur's blog contains several other entries that offer further insight into the NameNode as a single point of failure.

Edit: Further info about Facebook's improvements to the NameNode.
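The metadata backup that the Yahoo guidance refers to is usually achieved by pointing the NameNode at more than one storage directory, one of which can be a remote NFS mount. A minimal hdfs-site.xml sketch follows; the directory paths are illustrative assumptions, not defaults (note that the property was named dfs.name.dir in Hadoop 1.x and dfs.namenode.name.dir from 2.x onward):

```xml
<!-- hdfs-site.xml: write the fsimage and edit log to multiple directories.
     The NameNode writes its metadata to every listed directory, so losing
     one copy (e.g. a failed local disk) does not lose the filesystem image.
     The paths below are examples only. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/1/dfs/nn,file:///nfsmount/dfs/nn</value>
</property>
```

This mitigates permanent loss of the metadata, but by itself it does not remove the availability problem: the cluster is still unusable while the single NameNode is down.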
2010-12-21T19:51:44
Ravindra babu :

High availability for the NameNode was introduced with the Hadoop 2.x release.

It can be achieved in two modes: with NFS, or with the Quorum Journal Manager (QJM). High availability with QJM is the preferred option.

"In a typical HA cluster, two separate machines are configured as NameNodes. At any point in time, exactly one of the NameNodes is in an Active state, and the other is in a Standby state. The Active NameNode is responsible for all client operations in the cluster, while the Standby is simply acting as a slave, maintaining enough state to provide a fast failover if necessary."

Have a look at the SE questions below, which explain the complete failover process:

Secondary NameNode usage and High availability in Hadoop 2.x

How does Hadoop Namenode failover process works?
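A QJM-based HA setup is driven by a handful of hdfs-site.xml properties. The sketch below shows the core ones; the nameservice ID (mycluster) and the host names are illustrative assumptions:

```xml
<!-- Logical nameservice that clients address instead of a single host. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- The two NameNodes forming the Active/Standby pair. -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<!-- Quorum of JournalNodes through which the Active shares its edit log
     and from which the Standby keeps its state up to date. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<!-- Class HDFS clients use to discover the currently active NameNode. -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Enable ZooKeeper-based automatic failover (requires ZKFC daemons). -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

With automatic failover enabled, a ZKFailoverController process on each NameNode host uses ZooKeeper to detect failure of the Active node and promote the Standby.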
2016-01-18T08:35:42